Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Most NLP approaches to entity linking and coreference resolution, whether using sparse or dense text representations, focus on retrieving similar mentions. For example, the common "Wikification" task matches each entity mention against candidate Wikipedia articles. For many domains, such as bibliographic citations, there is no authority list with extensive textual descriptions of each entity, and ambiguously named entities occur chiefly in the context of other named entities. Unlike prior work, we therefore seek to leverage the information about an individual's social network that can be gleaned from textual evidence in order to disambiguate names. We combine BERT-based mention representations with a variety of graph-induction strategies and experiment with both supervised and unsupervised cluster-inference methods. We experiment with data from two domains: bibliographic citations from CrossRef and chains of transmission (isnads) from classical Arabic histories. We find that in-domain language-model pretraining can significantly improve mention representations, especially for larger corpora, and that the availability of bibliographic information, such as publication venue or title, also improves performance on this task. We also present a novel supervised cluster-inference model that offers competitive performance for little computational effort, making it well suited to settings where individuals must be identified without relying on an exhaustive authority list.
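The abstract does not spell out its cluster-inference models, but the general idea of grouping mention vectors without an authority list can be illustrated with a minimal greedy centroid-clustering sketch (the threshold, centroid update, and cosine measure are all illustrative assumptions, not the paper's method):

```python
import numpy as np

def cluster_mentions(embeddings: np.ndarray, threshold: float = 0.8) -> list[int]:
    """Greedy clustering: each mention joins the most similar existing cluster
    (by cosine similarity to the cluster centroid) or opens a new one."""
    # L2-normalise so dot products are cosine similarities.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    labels: list[int] = []
    sums: list[np.ndarray] = []  # running vector sum per cluster
    for vec in unit:
        best, best_sim = -1, threshold
        for k, s in enumerate(sums):
            sim = float(vec @ (s / np.linalg.norm(s)))
            if sim > best_sim:
                best, best_sim = k, sim
        if best == -1:  # nothing similar enough: start a new cluster
            labels.append(len(sums))
            sums.append(vec.copy())
        else:
            labels.append(best)
            sums[best] = sums[best] + vec
    return labels
```

Real mention embeddings would come from a (pretrained, in-domain) BERT encoder; here any fixed-dimension vectors work.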
Classification problems are extremely common in natural language processing and are addressed with a variety of resampling and filtering techniques, which often involve decisions about how to select training data or which test instances a model should label. We examine the tradeoffs in model performance involved in choosing training samples and in filtering the training and test data for a severely imbalanced token-classification task, and we examine the relationship between the magnitude of these tradeoffs and the base rate of the phenomenon of interest. In experiments on sequence labeling to detect rare phenomena in English and Arabic texts, we find that different methods of selecting training data bring tradeoffs between effectiveness and efficiency. We also see that, in highly imbalanced settings, filtering the test data with a first-pass frequency-based retrieval model matters as much for model performance as the selection of training data. The base rate of the rare positive class has a clear effect on the magnitude of the performance changes induced by the choice of training or test data: as the base rate increases, the differences brought by these choices diminish.
Robust classification is essential in tasks like autonomous-vehicle sign recognition, where the downsides of misclassification can be severe. Adversarial attacks threaten the robustness of neural network classifiers, causing them to consistently and confidently misclassify road signs. One such class of attack, shadow-based attacks, causes misreadings by applying a natural-looking shadow to the input image, yielding images that appear natural to a human observer but confuse these classifiers. Current defenses against such attacks use a simple adversarial training procedure, achieving rather low robustness of 25% and 40% on the GTSRB and LISA test sets, respectively. In this paper, we propose a robust, fast, and generalizable method, designed to defend against shadow attacks in the context of road-sign recognition, that augments source images with binary adaptive thresholding and edge maps. We empirically show its robustness against shadow attacks and reformulate the problem to show its similarity to $\varepsilon$-perturbation-based attacks. Experimental results show that our edge defense achieves 78% robustness while retaining 98% benign test accuracy on the GTSRB test set, with similar results from our threshold defense. A link to our code is in the paper.
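The two image augmentations named above (binary adaptive thresholding and edge maps) are standard operations; a minimal numpy-only sketch of one plausible form of each is shown below. The block size, offset, Sobel kernels, and edge threshold are generic defaults, not the paper's settings:

```python
import numpy as np

def adaptive_threshold(gray: np.ndarray, block: int = 11, c: float = 2.0) -> np.ndarray:
    """Binarise: a pixel is 1 where it exceeds its local mean minus an offset c."""
    pad = block // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    local_mean = np.zeros(gray.shape, dtype=float)
    for dy in range(block):           # sum the block x block neighbourhood
        for dx in range(block):
            local_mean += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    local_mean /= block * block
    return (gray > local_mean - c).astype(np.uint8)

def edge_map(gray: np.ndarray, thresh: float = 50.0) -> np.ndarray:
    """Binary edge map from the Sobel gradient magnitude."""
    g = gray.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(g, 1, mode="edge")
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    for i in range(3):                # direct 3x3 correlation
        for j in range(3):
            win = padded[i:i + g.shape[0], j:j + g.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)
```

The intuition for the defense is that a soft shadow shifts intensities smoothly, so a thresholded or edge-based view of the sign changes far less than the raw pixels do.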
Adaptive instance normalization (AdaIN) has become the standard method for style injection: by re-normalizing features through scaling and shifting operations, it has found widespread use in style transfer, image generation, and image-to-image translation. In this work, we present a generalization of AdaIN that relies on the whitening and coloring transformation (WCT), which we apply for style injection in large GANs. We show, via experiments on the StarGANv2 architecture, that this generalization, albeit conceptually simple, leads to significant improvements in the quality of generated images.
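The relationship between the two transforms can be made concrete with a small numpy sketch over feature matrices of shape (channels, positions): AdaIN matches per-channel mean and standard deviation, while WCT matches the full channel covariance, so AdaIN is the special case of diagonal covariances. This is a generic textbook formulation, not the paper's implementation:

```python
import numpy as np

def adain(content: np.ndarray, style: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """AdaIN: per-channel renormalisation. Features are (channels, positions)."""
    c_mu, c_sd = content.mean(1, keepdims=True), content.std(1, keepdims=True)
    s_mu, s_sd = style.mean(1, keepdims=True), style.std(1, keepdims=True)
    return (content - c_mu) / (c_sd + eps) * s_sd + s_mu

def wct(content: np.ndarray, style: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Whitening and coloring: match the style's full channel covariance."""
    def cov_pow(x, p):
        # Symmetric power of the channel covariance via eigendecomposition.
        xc = x - x.mean(1, keepdims=True)
        cov = xc @ xc.T / (x.shape[1] - 1) + eps * np.eye(x.shape[0])
        w, v = np.linalg.eigh(cov)
        return v @ np.diag(w ** p) @ v.T
    # Whiten the content (identity covariance), then color with the style.
    whitened = cov_pow(content, -0.5) @ (content - content.mean(1, keepdims=True))
    return cov_pow(style, 0.5) @ whitened + style.mean(1, keepdims=True)
```

In a GAN, `style` statistics would typically be predicted by a mapping network rather than computed from a reference feature map; the algebra is the same either way.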
We show that denoising diffusion probabilistic models (DDPMs), a class of score-based generative models, can be used to produce realistic mock observations of galaxy images. Our method is tested with photometric and rotation-curve observations from an extragalactic survey (PROBES) sample and with Dark Energy Spectroscopic Instrument (DESI) grz imaging of galaxies selected from the Sloan Digital Sky Survey. Subjectively, the generated galaxies are highly realistic when compared with samples from the real datasets. Borrowing from the deep generative learning literature, we use the Fréchet Inception Distance to quantify subjective and morphological similarity. We also introduce the "Synthetic Galaxy Distance" metric to compare the emergent physical properties (such as total magnitude, colour, and half-light radius) of a ground-truth parent dataset and its synthesised child dataset. We argue that the DDPM approach produces sharper and more realistic images than other generative methods such as adversarial networks (with the downside of more costly inference), and can be used to produce large samples of synthetic observations tailored to a specific imaging survey. We demonstrate two potential uses of the DDPM: (1) accurate inpainting of occluded data, such as satellite trails, and (2) domain transfer, where new input images can be processed to mimic the properties of the DDPM training set. Here we "DESI-fy" cartoon images as a proof of concept for domain transfer. Finally, we suggest potential applications of score-based approaches that may motivate further research on this topic within the astronomical community.
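For readers unfamiliar with DDPMs, the standard formulation (not specific to this paper) pairs a fixed Gaussian noising process with a learned Gaussian denoising process:

```latex
q(x_t \mid x_{t-1}) = \mathcal{N}\!\big(x_t;\, \sqrt{1-\beta_t}\, x_{t-1},\, \beta_t \mathbf{I}\big),
\qquad
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\big(x_{t-1};\, \mu_\theta(x_t, t),\, \Sigma_\theta(x_t, t)\big)
```

Sampling runs the learned reverse chain $p_\theta$ from pure noise down to $t = 0$; the inpainting use case conditions each reverse step on the known (unmasked) pixels.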
Active inference is a probabilistic framework for modelling the behaviour of biological and artificial agents, which derives from the principle of minimising free energy. In recent years, this framework has been successfully applied to a variety of settings where the goal was to maximise reward, offering comparable and sometimes superior performance to alternative approaches. In this paper, we clarify the connection between reward maximisation and active inference by demonstrating how and when active inference agents execute actions that are optimal for maximising reward. Precisely, we show the conditions under which active inference produces the optimal solution to the Bellman equation, a formulation that underlies several approaches to model-based reinforcement learning and control. On partially observed Markov decision processes, the standard active inference scheme can produce optimal actions for planning horizons of 1, but not beyond. In contrast, a recently developed recursive active inference scheme (sophisticated inference) can produce optimal actions for any finite temporal horizon. We complement the analysis with a discussion of the broader relation between active inference and reinforcement learning.
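For reference, the Bellman optimality equation invoked above can be written, in standard discounted MDP notation (a textbook form, not quoted from this paper), as:

```latex
V^{*}(s) \;=\; \max_{a}\Big[\, r(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \,\Big]
```

The paper's claim is then about when the action chosen by an active inference agent coincides with the maximising $a$ in this equation, for a given planning horizon.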
We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
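The discrete optimization described above can be sketched as follows; the exact edge weights used by the paper are not given in this abstract, so they are left as generic $w_{ij} > 0$:

```latex
\hat{\theta} \;=\; \operatorname*{argmin}_{\theta \in \mathbb{R}^n}
\;\frac{1}{2}\sum_{i=1}^{n} (y_i - \theta_i)^2
\;+\; \lambda \sum_{(i,j) \in E} w_{ij}\, \lvert \theta_i - \theta_j \rvert,
```

where $E$ contains the pairs of neighboring cells in the Voronoi diagram of the design points and $\lambda > 0$ is the regularization level. The fitted function is the piecewise-constant extension $\hat{f}(x) = \hat{\theta}_i$ for $x$ in the $i$-th Voronoi cell.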
In this work, we introduce a hypergraph representation learning framework called Hypergraph Neural Networks (HNN) that jointly learns hyperedge embeddings along with a set of hyperedge-dependent embeddings for each node in the hypergraph. HNN derives multiple embeddings per node in the hypergraph where each embedding for a node is dependent on a specific hyperedge of that node. Notably, HNN is accurate, data-efficient, flexible with many interchangeable components, and useful for a wide range of hypergraph learning tasks. We evaluate the effectiveness of the HNN framework for hyperedge prediction and hypergraph node classification. We find that HNN achieves an overall mean gain of 7.72% and 11.37% across all baseline models and graphs for hyperedge prediction and hypergraph node classification, respectively.
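The core indexing idea, one embedding per (node, hyperedge) incidence rather than one per node, can be illustrated with a toy container; the class, its additive composition of node and hyperedge vectors, and all names here are illustrative assumptions, not the HNN architecture:

```python
import numpy as np

class HyperedgeDependentEmbeddings:
    """Toy store of hyperedge-dependent node embeddings: each (node, hyperedge)
    incidence pair gets its own vector, formed here from a shared node vector
    plus the hyperedge vector. Illustrates the indexing scheme only."""

    def __init__(self, num_nodes: int, hyperedges: list[list[int]],
                 dim: int = 8, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.node = rng.normal(size=(num_nodes, dim))
        self.edge = {e: rng.normal(size=dim) for e in range(len(hyperedges))}
        self.members = {e: set(nodes) for e, nodes in enumerate(hyperedges)}

    def embed(self, v: int, e: int) -> np.ndarray:
        """Embedding of node v *in the context of* hyperedge e."""
        if v not in self.members[e]:
            raise KeyError(f"node {v} is not a member of hyperedge {e}")
        return self.node[v] + self.edge[e]

    def embeddings_of(self, v: int) -> dict[int, np.ndarray]:
        """A node has one embedding per hyperedge it belongs to."""
        return {e: self.embed(v, e) for e, m in self.members.items() if v in m}
```

In the actual framework these per-incidence vectors are learned jointly with the hyperedge embeddings; the point of the sketch is only that the lookup key is the pair (node, hyperedge), not the node alone.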
Graph Neural Networks (GNNs) have become increasingly important in recent years due to their state-of-the-art performance on many important downstream applications. Existing GNNs have mostly focused on learning a single node representation, even though a node often exhibits polysemous behavior in different contexts. In this work, we develop a persona-based graph neural network framework called PersonaSAGE that learns multiple persona-based embeddings for each node in the graph. Such disentangled representations are more interpretable and useful than a single embedding. Furthermore, PersonaSAGE learns the appropriate set of persona embeddings for each node in the graph, and every node can have a different number of assigned persona embeddings. The framework is flexible, and its general design makes the learned embeddings widely applicable across domains. We evaluate our approach on publicly available benchmark datasets against a variety of baselines. The experiments demonstrate the effectiveness of PersonaSAGE for a variety of important tasks, including link prediction, where we achieve an average gain of 15% while remaining competitive for node classification. Finally, we also demonstrate the utility of PersonaSAGE with a case study on personalized recommendation of different entity types in a data management platform.